Scaled unscented transform Gaussian sum filter: theory and application
In this work we consider the state estimation problem in
nonlinear/non-Gaussian systems. We introduce a framework, called the scaled
unscented transform Gaussian sum filter (SUT-GSF), which combines two ideas:
the scaled unscented Kalman filter (SUKF) based on the concept of scaled
unscented transform (SUT), and the Gaussian mixture model (GMM). The SUT is
used to approximate the mean and covariance of a Gaussian random variable which
is transformed by a nonlinear function, while the GMM is adopted to approximate
the probability density function (pdf) of a random variable through a set of
Gaussian distributions. With these two tools, a framework can be set up to
assimilate nonlinear systems in a recursive way. Within this framework, one can
treat a nonlinear stochastic system as a mixture model of a set of sub-systems,
each of which takes the form of a nonlinear system driven by a known Gaussian
random process. Then, for each sub-system, one applies the SUKF to estimate the
mean and covariance of the underlying Gaussian random variable transformed by
the nonlinear governing equations of the sub-system. Incorporating the
estimates from the sub-systems into the GMM gives an explicit (approximate)
form of the pdf, which can be regarded as a "complete" solution to the state
estimation problem, as all of the statistical information of interest can be
obtained from the explicit form of the pdf ...
This work constructs a Gaussian sum filter based on the scaled unscented
transform.
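The core building block described above, the scaled unscented transform, can be sketched as follows. This is a generic SUT in the Julier/Uhlmann convention, with the usual tuning parameters alpha, beta, kappa; these defaults are an assumption for illustration, not taken from the paper.

```python
import numpy as np

def scaled_unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Approximate the mean and covariance of y = f(x) for x ~ N(mean, cov).

    A minimal sketch of the scaled unscented transform (SUT); parameter
    names and defaults follow the common convention and are assumptions.
    """
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean plus/minus scaled columns of a matrix square root.
    L = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])   # shape (2n+1, n)
    # Weights for the mean and covariance recombinations.
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    # Propagate sigma points through the nonlinearity and recombine.
    Y = np.array([f(s) for s in sigma])
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear map the transform recovers the exact transformed mean and covariance, which is a convenient sanity check.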
A Bayesian Consistent Dual Ensemble Kalman Filter for State-Parameter Estimation in Subsurface Hydrology
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing
uncertainties in subsurface groundwater models. The EnKF sequentially
integrates field data into simulation models to obtain a better
characterization of the model's state and parameters. These are generally
estimated following joint and dual filtering strategies, in which, at each
assimilation cycle, a forecast step by the model is followed by an update step
with incoming observations. The Joint-EnKF directly updates the augmented
state-parameter vector while the Dual-EnKF employs two separate filters, first
estimating the parameters and then estimating the state based on the updated
parameters. In this paper, we reverse the order of the forecast-update steps
following the one-step-ahead (OSA) smoothing formulation of the Bayesian
filtering problem, based on which we propose a new dual EnKF scheme. Compared
to the standard Dual-EnKF, this scheme introduces a new update
step to the state in a fully consistent Bayesian framework, which is shown to
enhance the performance of the dual filtering approach without any significant
increase in the computational cost. Numerical experiments are conducted with a
two-dimensional synthetic groundwater aquifer model to assess the performance
and robustness of the proposed scheme, and to evaluate its
results against those of the Joint- and Dual-EnKFs. The proposed scheme is able
to successfully recover both the hydraulic head and the aquifer conductivity,
further providing reliable estimates of their uncertainties. Compared with the
standard Joint- and Dual-EnKFs, the proposed scheme is found more robust to
different assimilation settings, such as the spatial and temporal distribution
of the observations, and the level of noise in the data. Based on our
experimental setups, it yields up to 25% more accurate state and parameter
estimates.
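The analysis step that both the Joint- and Dual-EnKF variants build on can be sketched as a standard stochastic (perturbed-observation) EnKF update. This is a generic textbook update, not the paper's OSA-smoothing scheme; the linear observation operator H and the function name are illustrative assumptions.

```python
import numpy as np

def enkf_update(ens, obs, H, R, rng):
    """One stochastic EnKF analysis step with perturbed observations.

    ens: (Ne, n) ensemble of state vectors; obs: (m,) observations;
    H: (m, n) linear observation operator; R: (m, m) obs-error covariance.
    A generic sketch, not the specific dual/OSA scheme of the paper.
    """
    Ne = ens.shape[0]
    x_mean = ens.mean(axis=0)
    X = ens - x_mean                      # state anomalies, (Ne, n)
    Y = X @ H.T                           # predicted-observation anomalies
    # Sample covariances in observation space.
    Pyy = Y.T @ Y / (Ne - 1) + R
    Pxy = X.T @ Y / (Ne - 1)
    K = Pxy @ np.linalg.inv(Pyy)          # Kalman gain, (n, m)
    # Perturb observations so the analysis ensemble keeps the right spread.
    pert = rng.multivariate_normal(np.zeros(len(obs)), R, size=Ne)
    innov = (obs + pert) - ens @ H.T
    return ens + innov @ K.T
```

For joint state-parameter estimation, the same update is applied to the augmented state-parameter vector; the dual strategies instead run two such filters in sequence.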
Optimal projection of observations in a Bayesian setting
Optimal dimensionality reduction methods are proposed for the Bayesian
inference of a Gaussian linear model with additive noise in the presence of
overabundant data. Three different optimal projections of the observations are
proposed based on information theory: the projection that minimizes the
Kullback-Leibler divergence between the posterior distributions of the original
and the projected models, the one that minimizes the expected Kullback-Leibler
divergence between the same distributions, and the one that maximizes the
mutual information between the parameter of interest and the projected
observations. The first two optimization problems are formulated as the
determination of an optimal subspace and therefore the solution is computed
using Riemannian optimization algorithms on the Grassmann manifold. Regarding
the maximization of the mutual information, it is shown that there exists an
optimal subspace that minimizes the entropy of the posterior distribution of
the reduced model; a basis of the subspace can be computed as the solution to a
generalized eigenvalue problem; an a priori error estimate on the mutual
information is available for this particular solution; and the dimensionality
of the subspace needed to exactly conserve the mutual information between the
input and the output of the model is less than the number of
parameters to be inferred. Numerical applications to linear and nonlinear
models are used to assess the efficiency of the proposed approaches, and to
highlight their advantages compared to standard approaches based on the
principal component analysis of the observations.
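The generalized eigenvalue problem mentioned above can be sketched for the linear Gaussian model y = G x + e with x ~ N(0, Sx) and e ~ N(0, Se). One standard construction ranks observation-space directions by signal-to-noise ratio via (G Sx G^T) v = λ Se v; the paper's exact optimality criterion may differ in detail, and the function name is an assumption.

```python
import numpy as np

def projection_basis(G, Sx, Se, k):
    """Rank-k projection of the observations for y = G x + e.

    Solves the generalized eigenproblem (G Sx G^T) v = lam * Se v by
    whitening with the Cholesky factor of Se, and keeps the k directions
    with the largest signal-to-noise eigenvalues. A sketch of one common
    construction, not necessarily the paper's exact formulation.
    """
    A = G @ Sx @ G.T                     # signal covariance in obs space
    L = np.linalg.cholesky(Se)           # Se = L L^T
    Linv = np.linalg.inv(L)
    Aw = Linv @ A @ Linv.T               # whitened, ordinary symmetric problem
    lam, W = np.linalg.eigh(Aw)          # ascending eigenvalues
    V = Linv.T @ W                       # back-transform the eigenvectors
    return V[:, ::-1][:, :k]             # (m, k), top-k directions
```

With white noise and a diagonal signal covariance, the leading direction aligns with the highest-variance component, as expected.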
Coordinate Transformation and Polynomial Chaos for the Bayesian Inference of a Gaussian Process with Parametrized Prior Covariance Function
This paper addresses model dimensionality reduction for Bayesian inference
based on prior Gaussian fields with uncertainty in the covariance function
hyper-parameters. The dimensionality reduction is traditionally achieved using
the Karhunen-Loève expansion of a prior Gaussian process assuming a covariance
function with fixed hyper-parameters, despite the fact that these are uncertain
in nature. The posterior distribution of the Karhunen-Lo\`{e}ve coordinates is
then inferred using available observations. The resulting inferred field is
therefore dependent on the assumed hyper-parameters. Here, we seek to
efficiently estimate both the field and covariance hyper-parameters using
Bayesian inference. To this end, a generalized Karhunen-Loève expansion is
derived using a coordinate transformation to account for the dependence on the
covariance hyper-parameters. Polynomial Chaos expansions are
employed for the acceleration of the Bayesian inference using similar
coordinate transformations, enabling us to avoid expanding explicitly the
solution dependence on the uncertain hyper-parameters. We demonstrate the
feasibility of the proposed method on a transient diffusion equation by
inferring spatially-varying log-diffusivity fields from noisy data. The
inferred profiles were found to be closer to the true profiles when including
the hyper-parameters' uncertainty in the inference formulation.
Comment: 34 pages, 17 figures
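The Karhunen-Loève machinery underlying this abstract can be sketched in its discretized form: eigendecompose the covariance matrix of the process on a grid, truncate, and synthesize realizations from the leading modes. The squared-exponential covariance with fixed length scale and variance is an illustrative choice; the paper treats these hyper-parameters as uncertain and generalizes the expansion accordingly.

```python
import numpy as np

def kl_modes(x, ell, var, k):
    """Leading k Karhunen-Loeve modes of a Gaussian process on grid x,
    assuming a squared-exponential covariance with length scale ell and
    variance var (an illustrative choice of hyper-parameters)."""
    C = var * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)
    lam, V = np.linalg.eigh(C)                  # ascending eigenvalues
    return lam[::-1][:k], V[:, ::-1][:, :k]    # descending, top-k modes

def sample_field(x, ell, var, k, rng):
    """Draw a realization of the truncated expansion
    f = sum_i sqrt(lam_i) * xi_i * phi_i, with xi_i ~ N(0, 1)."""
    lam, V = kl_modes(x, ell, var, k)
    xi = rng.standard_normal(k)
    return V @ (np.sqrt(np.clip(lam, 0.0, None)) * xi)
```

Inferring the field then amounts to inferring the coordinates xi from data; the coordinate transformation proposed in the paper extends this so the modes themselves track the uncertain hyper-parameters.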